Assessing User Needs, Satisfaction, and Library Performance at the University of Washington Libraries
Author: Steve Hiller
Abstract
THE UNIVERSITY OF WASHINGTON LIBRARIES has conducted triennial faculty and student library surveys since 1992. Surveys are sent to all faculty and a random sample of graduate and undergraduate students. Results have revealed significant variation within and between user groups concerning library satisfaction, use, priorities, and importance. There were 2,749 responses to the most recent survey in 1998, including more than 1,500 completed surveys returned from faculty. These large-scale surveys, while extraordinarily valuable, have proven costly and time-consuming to design, administer, and analyze. The ARL LibQUAL+ pilot offered an opportunity to employ a different methodology and design that focused on quality of service and library support through a Web-based survey. This article discusses issues and results associated with these different approaches.

Steve Hiller, Natural Sciences Library, Allen Library South, Ground and First Floors, Box 352900, University of Washington, Seattle, WA 98195-2900

LIBRARY TRENDS, Vol. 49, No. 4, Spring 2001, pp. 605-625. © 2001 The Board of Trustees, University of Illinois

INTRODUCTION

The University of Washington Libraries (UW Libraries) has utilized a number of approaches during the past decade to assess the effectiveness of service programs and library support of faculty and student research, teaching, and learning. Among the most valuable methods employed have been large-scale surveys of faculty and students conducted every three years beginning in 1992. Focus groups, usability and observational studies, targeted surveys, and interviews are also used to assess library programs and services as well as user needs. Results from the triennial surveys have played a critical role in supporting the transition to a user-centered library (Wilson, 1995) and in creating a culture of assessment (Lakos, 1998). The large representative data sets generated by these surveys have also proven to be powerful information sources in the campus political environment. Survey results and analyses can be found at the UW Libraries' Web site on user surveys: http://www.lib.washington.edu/surveys/. These surveys, though quite valuable, are expensive and time-consuming to design, administer, and analyze. Participation in the ARL-sponsored SERVQUAL (now LibQUAL+) pilot provided an opportunity to use a well-established survey tool with a different methodology, design, content, and delivery mechanism. It also afforded the chance for interinstitutional comparisons using a standardized survey instrument. Another attractive feature was the ability to gain experience with a Web-based survey that might reduce survey costs associated with printing, mailing, and data entry. This article will compare the UW Libraries' surveys with LibQUAL+ results from the University of Washington in such areas as response and representativeness of survey population, similarities and differences in results, and whether the right questions are being asked.

USER SURVEYS

Library user surveys have become widespread in academic libraries during the past twenty years. Surveys have often been used as a tool to assess service quality and user satisfaction.
The Association of Research Libraries issued four Systems and Procedures Exchange Center (SPEC) kits on user surveys and studies between 1981 and 1994 (Association of Research Libraries, 1981, 1984, 1988, 1994). A substantial body of literature has been developed on surveys and service quality, led by studies and reviews from such library educators/professionals as Hernon and McClure (1990); Van House, Weil, and McClure (1990); Hernon and Altman (1998, 2000); Nitecki and Franklin (1999); and Hernon and Whitman (2001). Library applications of the SERVQUAL instrument have been covered by Nitecki (1996) and Cook and Heath (1999), among others. Rapid changes in library services and operations, demands for internal institutional accountability, and assessment expectations by external accrediting agencies have contributed to further development and application of user surveys within academic libraries during the past decade.

User surveys can be designed and administered in a number of ways. Self-administered surveys are often employed to reach a large number of potential respondents with a minimum of direct contact and cost. Individuals are given or sent surveys to complete and return, and the responses are turned into data that can be analyzed. Surveys can range from broad and comprehensive to those narrowly focused on specific services or activities. When properly designed and administered, user surveys can provide both quantitative and qualitative data directly from the target population.

UNIVERSITY OF WASHINGTON LIBRARIES' SURVEY: METHODOLOGY AND DESIGN

The University of Washington Libraries began an active program of assessing user needs, satisfaction, and the impact of library services and resources in 1992. Prior to this time, user input to the UW Libraries was generally informal and unsolicited, arriving through such channels as suggestion boxes and anecdotal comments from service desks.
Other opportunities for user comment came through the Faculty Senate Council on University Libraries, a biennial meeting between subject selectors and faculty liaisons on collections-related issues, and some earlier in-library surveys that focused on specific activities within the library unit. The catalyst for the development of a broad-based survey of faculty and students came from the UW Libraries' first strategic plan in 1991, which called for a user-centered approach to services. Specifically, the strategic plan recommended that the libraries: "Develop and implement a study to identify user populations, their information needs and how well they are being met" (University of Washington Libraries, 1991, p. 15).

The Task Force on Library Services was appointed by the Director of Libraries in late 1991 to design and implement a user survey that would determine: who users and potential users are; how and why the library is used (or isn't used); what sources are used for library-related information; what faculty and students' library-related needs are; and how satisfied faculty and students are with the libraries.

The literature on academic library user surveys available in the early 1990s revealed a wide spectrum of applications and uses (see Association of Research Libraries, 1981, 1984, 1991; Van House, Weil, & McClure, 1990). Some common characteristics of these surveys were: distribution within the library to users was more prevalent than mailed surveys; focus on physical use of the library (e.g., "what did you do in the library today?"); concentration on specific services (especially the online catalog); and interest in user satisfaction.

The task force designed the initial survey in 1992 in consultation with library staff and the University's Office of Educational Assessment (OEA).
The decision was made early in the design process to survey all user groups, distribute the survey through the mail in order to reach potential nonusers, and provide similar survey content for each group to enable comparisons. The survey would be sent to all faculty and a random sample of graduate and undergraduate students. While distributing the survey to all faculty would increase costs, it would also facilitate survey promotion and publicity, obtain a sufficient number of responses to permit analysis by academic subject area, and foster positive political outcomes. Survey questions were similar for faculty and graduate students, with about 75 percent consistency between faculty and undergraduates. Adequate space was provided for survey respondents to write comments.

Content evolved with each subsequent survey in 1995 and 1998, and some aspects of survey design changed. Rapid changes in library services and programs during the 1990s and the usefulness of the data provided by some questions were prime factors in survey revision. However, there was a core group of questions in each survey that dealt with: information sources needed for research, teaching, and learning; reasons for and frequency of library use; campus computer network connectivity; use of electronic resources; instructional needs and effectiveness; library unit use; satisfaction; and services availability or satisfaction.

The initial survey in 1992 was pilot tested in March with a group of faculty and students, revised, and then mailed midway through the Spring quarter to 3,900 faculty and a random, nonstratified sample of 1,000 graduate and 1,000 undergraduate students (sample size was based on an expected 50 percent return rate). An incentive (entry into a drawing for bookstore gift certificates) was offered to students who returned completed survey forms.
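The sample-size logic described above (mail enough surveys that the expected number of completed returns meets a target, given an anticipated return rate) can be sketched in a few lines. This is an illustrative calculation only; the function name is invented, and the figures simply restate the 1992 assumptions:

```python
import math

def required_mailing(target_responses: int, expected_return_rate: float) -> int:
    """Number of surveys to mail so that, at the expected return rate,
    the anticipated number of completed returns meets the target."""
    return math.ceil(target_responses / expected_return_rate)

# At the 50 percent return rate assumed in 1992, mailing 1,000 survey
# forms per student group yields an expected 500 responses.
print(required_mailing(500, 0.50))  # → 1000
```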
Two weeks after the initial surveys were mailed, students were sent a second survey form, while faculty were sent a reminder notice. Completed surveys were returned to the Office of Educational Assessment (OEA), which arranged for data entry. Data were made available in SPSS format, and results were available in early September 1992.

Subsequent surveys in 1995 and 1998 generally employed a similar methodology and design. Survey design work began in January of each year, pilot testing took place in March, and surveys were mailed in late April to early May. The undergraduate sample was increased to 2,000 for 1995 and 1998, and the 1998 survey also included a specialized set of questions for faculty and graduate students in the biological and health sciences, and one for faculty and students in the fine arts. Focus groups were also held prior to the 1998 survey to provide input from users on their perception of issues and concerns. The bookstore gift certificate drawing was extended to all groups beginning with the 1995 survey. Reminder notices were sent in 1995 but not a follow-up survey form. In 1998, a survey accompanied the reminder letter. Both the cover letter and survey form included the name, phone number, and e-mail address of a librarian as a contact person for questions or clarification. The few questions received generally requested another survey be sent to replace a lost one.

Sending this type of survey to nearly 7,000 faculty and students is not inexpensive. Direct survey costs (not including library staff time) in 1998 totaled $19,000, about $7 per returned survey. Survey costs in 1992 and 1995 were about $12,000. The 1998 costs were distributed in the following manner: printing, 30 percent; mailing, 30 percent; data entry, 30 percent; other (consultation, incentives), 10 percent. Staff time for the 1998 survey was estimated at approximately 500 hours, including analysis and reporting.
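The 1998 cost figures can be checked with simple arithmetic. The sketch below only restates the numbers given in the text (total direct costs, returned surveys, and the reported category shares):

```python
direct_costs = 19_000   # 1998 direct survey costs in dollars (excludes staff time)
returned = 2_749        # completed surveys returned in 1998

# Reported distribution of direct costs by category
shares = {"printing": 0.30, "mailing": 0.30, "data entry": 0.30, "other": 0.10}

cost_per_survey = direct_costs / returned
print(f"Cost per returned survey: ${cost_per_survey:.2f}")  # about $7

for category, share in shares.items():
    print(f"  {category}: ${direct_costs * share:,.0f}")
```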
LIBQUAL+

The UW Libraries was one of twelve libraries that participated in the ARL-sponsored LibQUAL+ pilot administered in Spring 2000 (Cook, Heath, & Thompson, 2000a, 2000b). Survey design and methodology were handled primarily by a team from Texas A&M, where a SERVQUAL-based library survey had been used several times (Cook & Heath, 1999). In addition to the twenty-two basic SERVQUAL questions, which covered the standard dimensions of assurance, empathy, reliability, responsiveness, and tangibles, nineteen additional questions were added to test two additional dimensions: access to collections and the library as place. Thus, there were forty-one questions that used the SERVQUAL three-column response format of minimum, perceived, and desired. Another fourteen behavioral questions, two on frequency of library use, and an overall service quality question were also added, all using just one response column. The survey also collected demographic data.

The survey team at Texas A&M determined that the survey be administered to a random sample of 600 faculty, 600 graduate students, and 900 undergraduates at each institution, based on an anticipated return of 200 surveys from each group. The UW Office of Educational Assessment extracted the sample from the faculty and student databases, and e-mail address lists created for each group were sent to the UW Libraries. The UW Libraries systems office created separate mailing lists for each group. A cover letter from the director of the UW Libraries was sent by e-mail to each participant. The letter included information about the survey and the university's reasons for participation, and also provided a URL where respondents could complete the survey. The initial message was sent May 2, and a reminder notice was sent on May 11. Almost immediately after the initial e-mail notification was sent, there was a steady stream of messages back to the director and the local survey coordinator.
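For context, the three-column minimum/perceived/desired format is commonly analyzed as gap scores: the service-adequacy gap (perceived minus minimum) and the service-superiority gap (perceived minus desired). The sketch below uses those standard definitions with made-up ratings; it is not drawn from the pilot's own analysis:

```python
def gap_scores(minimum: float, perceived: float, desired: float) -> dict:
    """Gap scores for a single survey item from its three column ratings."""
    return {
        "adequacy": perceived - minimum,     # > 0: service exceeds minimum expectations
        "superiority": perceived - desired,  # usually <= 0: short of the desired level
    }

# Hypothetical ratings for one item
print(gap_scores(minimum=6, perceived=7, desired=8))
```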
LibQUAL+ implementation at the University of Washington ultimately generated more than fifty e-mail messages, most coming from faculty members. The messages fell into two basic groups: technical problems trying to complete the survey, and comments, usually negative, on survey design and content.

Direct expenses were $2,000 for the UW Libraries, paid as a participant in the ARL project. This worked out to about $5 per completed survey (excluding surveys from library staff). Library staff contributed about 150 hours to the project, including responding to e-mail messages, analysis, and report writing.

SURVEY RESPONSE AND REPRESENTATIVENESS

Survey return rates for the 1992, 1995, and 1998 UW Libraries' surveys and the 2000 LibQUAL+ survey are shown in Table 1. A second survey mailing appeared effective in raising the response rate, as seen in the 1992 return rates for students and for all groups in 1998. The number of faculty surveyed varied according to the criteria used to define the faculty pool, but all surveys included tenure-track and research faculty as well as full-time lecturers. The overall response rate as shown is slightly understated, as undeliverable surveys were not subtracted from the total sent out. Undeliverable survey rates ranged from approximately 0.5 percent of faculty to 2 percent of undergraduate students.

Response rates to the LibQUAL+ survey were substantially lower. The definition of faculty was the same as that used in the UW Libraries' 1998 survey. LibQUAL+ response rates were calculated by matching the number of completed surveys against the number of e-mail addresses to which the survey message was sent. Approximately 1 percent of these messages were undeliverable.

Representativeness of Survey Respondents

The large number of responses to the UW Libraries' surveys generated correspondingly large data sets, especially for the faculty survey.
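The response-rate calculation just described (completed surveys measured against invitations that actually reached recipients) amounts to the following; the counts here are hypothetical, not the pilot's actual figures:

```python
def response_rate(completed: int, sent: int, undeliverable: int) -> float:
    """Response rate against surveys that actually reached recipients."""
    delivered = sent - undeliverable
    return completed / delivered

# Hypothetical: 2,100 e-mail invitations, about 1 percent undeliverable
rate = response_rate(completed=200, sent=2100, undeliverable=21)
print(f"{rate:.1%}")
```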
As Table 2 shows, the faculty survey respondent population in 1998 was reasonably representative of the population as a whole when grouped by broad subject areas. Faculty in the Health Sciences were slightly under-represented, while those in the Humanities/Social Sciences/Fine Arts group were somewhat over-represented compared to the actual population. Response rates by academic schools ranged from 31 percent in Business to 54 percent in the Social Science departments within the College of Arts and Sciences. Graduate student responses (Table 3) were similar to the faculty, with Health Sciences respondents again lower than their percentage of the actual population, while those from Humanities/Arts/Social Sciences were slightly higher. Response rates by academic schools ranged from 24 percent in Dentistry and 28 percent in Education to 62 percent in Nursing and 72 percent in Social Sciences. Health Sciences does have a larger proportion of faculty and graduate/professional students located away from the main UW campus, and this may be a factor in the under-representation of respondents from those areas.
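A representativeness comparison of the kind shown in Table 2 reduces to comparing each group's share of respondents with its share of the underlying population. The sketch below illustrates that calculation with invented group names and counts, not the actual UW data:

```python
def representation_gaps(respondents: dict, population: dict) -> dict:
    """Percentage-point gap between each group's share of respondents
    and its share of the underlying population (positive = over-represented)."""
    r_total = sum(respondents.values())
    p_total = sum(population.values())
    return {group: round(100 * (respondents[group] / r_total
                                - population[group] / p_total), 1)
            for group in population}

# Invented counts for illustration only
resp = {"Health Sciences": 400, "Humanities/Social Sciences": 700, "Sciences": 400}
pop  = {"Health Sciences": 1200, "Humanities/Social Sciences": 1500, "Sciences": 1000}
print(representation_gaps(resp, pop))
```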
Journal: Library Trends
Volume 49, Issue 4
Pages 605-625
Publication date: 2001